
    A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications

    Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, these biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach in which convolutional neural networks are combined with computational neuroscience to yield a real-time, end-to-end model of human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material, and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear-mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and can achieve human-level performance in real time. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
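    The abstract describes the architecture only at a high level (convolutional, parallel, differentiable, waveform in, cochlear responses out). The sketch below illustrates one plausible reading: a 1-D convolutional encoder-decoder that maps an audio waveform to multi-channel basilar-membrane outputs. The layer counts, channel widths, kernel sizes, and the use of PyTorch are illustrative assumptions, not the published CoNNear topology.

    # Illustrative sketch only: a CoNNear-style 1-D convolutional encoder-decoder
    # mapping an audio waveform to multi-channel cochlear (basilar-membrane)
    # outputs. All hyperparameters below are assumptions for illustration.
    import torch
    import torch.nn as nn

    class CochlearCNN(nn.Module):
        def __init__(self, n_channels: int = 201, width: int = 64, depth: int = 4):
            super().__init__()
            enc, dec = [], []
            c_in = 1
            for i in range(depth):
                c_out = width * (i + 1)
                # Strided convolutions downsample the waveform in the encoder.
                enc.append(nn.Conv1d(c_in, c_out, kernel_size=15, stride=2, padding=7))
                enc.append(nn.Tanh())  # nonlinearity allows level-dependent behaviour
                c_in = c_out
            for i in reversed(range(depth)):
                c_out = width * i if i > 0 else n_channels
                # Transposed convolutions upsample back to the input length.
                dec.append(nn.ConvTranspose1d(c_in, c_out, kernel_size=16,
                                              stride=2, padding=7))
                if i > 0:
                    dec.append(nn.Tanh())
                c_in = c_out
            self.encoder = nn.Sequential(*enc)
            self.decoder = nn.Sequential(*dec)

        def forward(self, audio: torch.Tensor) -> torch.Tensor:
            # audio: (batch, 1, time) -> cochlear outputs: (batch, n_channels, time)
            return self.decoder(self.encoder(audio))

    model = CochlearCNN()
    waveform = torch.randn(1, 1, 2048)   # dummy 2048-sample audio frame
    bm_outputs = model(waveform)         # one output waveform per cochlear place
    print(bm_outputs.shape)              # torch.Size([1, 201, 2048])

    Because the model is a single feed-forward convolutional pass, it runs in parallel across time and channels on a GPU, which is the property the abstract credits for real-time operation.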

    Hearing-impaired bio-inspired cochlear models for real-time auditory applications

    Biophysically realistic models of the cochlea are based on cascaded transmission-line (TL) models, which capture longitudinal coupling, cochlear nonlinearities, and human frequency selectivity. However, these models are slow to compute (on the order of seconds to minutes), while machine-hearing and hearing-aid applications require a real-time solution. Consequently, real-time applications often adopt more basic and less time-consuming descriptions of cochlear processing (gammatone, dual-resonance nonlinear), even though there are clear advantages to using more biophysically correct models. To overcome this, we recently combined nonlinear Deep Neural Networks (DNNs) with analytical TL cochlear-model descriptions to build a real-time model of cochlear processing that captures the biophysical properties associated with the TL model. In this work, we aim to extend the normal-hearing DNN-based cochlear model (CoNNear) to simulate frequency-specific patterns of hearing-sensitivity loss, yielding a set of normal-hearing and hearing-impaired auditory models that can be computed in real time and are differentiable. They can hence be used in backpropagation networks to develop the next generation of hearing-aid and machine-hearing applications.
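    To illustrate why differentiability matters here: because both the normal-hearing and hearing-impaired models admit gradients, an upstream processing network (e.g., a hearing-aid DNN) can be trained by backpropagating through them. The loop below is a hypothetical sketch of that idea, reusing the illustrative CochlearCNN class from the sketch above; HearingAidNet and the training objective are assumed stand-ins, not published components.

    # Hypothetical sketch: training a hearing-aid DNN by backpropagating through
    # frozen, differentiable normal-hearing (NH) and hearing-impaired (HI)
    # cochlear models. Names and objective are illustrative assumptions.
    import torch
    import torch.nn as nn

    class HearingAidNet(nn.Module):
        """Toy waveform-to-waveform processor standing in for a hearing-aid DNN."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=31, padding=15), nn.Tanh(),
                nn.Conv1d(32, 1, kernel_size=31, padding=15),
            )
        def forward(self, x):
            return self.net(x)

    def freeze(model: nn.Module) -> nn.Module:
        for p in model.parameters():
            p.requires_grad_(False)
        return model.eval()

    # Assumed pre-trained cochlear models (CochlearCNN as sketched above).
    normal_model = freeze(CochlearCNN())    # normal-hearing reference
    impaired_model = freeze(CochlearCNN())  # frequency-specific sensitivity loss

    aid = HearingAidNet()
    optim = torch.optim.Adam(aid.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    for step in range(100):                    # dummy training loop
        x = torch.randn(8, 1, 2048)            # batch of clean speech frames
        target = normal_model(x)               # NH cochlear response to x
        processed = impaired_model(aid(x))     # HI response to aid-processed x
        loss = loss_fn(processed, target)      # restore the NH response
        optim.zero_grad()
        loss.backward()                        # gradients flow through HI model
        optim.step()

    The design point is that the cochlear models stay frozen and act as a differentiable measurement stage: only the hearing-aid network's parameters are updated, which is exactly the use case a non-differentiable TL simulation cannot support.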